Prediction error identification of linear dynamic networks with rank-reduced noise
Dynamic networks are interconnected dynamic systems with measured node
signals and dynamic modules reflecting the links between the nodes. We address
the problem of identifying a dynamic network with known topology, on the
basis of measured signals, for the situation of additive process noise on the
node signals that is spatially correlated and that is allowed to have a
spectral density that is singular. A prediction error approach is followed in
which all node signals in the network are jointly predicted. The resulting
joint-direct identification method generalizes the classical direct method for
closed-loop identification to handle situations of mutually correlated noise on
inputs and outputs. When applied to general dynamic networks with rank-reduced
noise, it appears that the natural identification criterion becomes a weighted
LS criterion that is subject to a constraint. This constrained criterion is
shown to lead to maximum likelihood estimates of the dynamic network and
therefore to minimum variance properties, reaching the Cramer-Rao lower bound
in the case of Gaussian noise.
Comment: 17 pages, 5 figures, revision submitted for publication in Automatica, 4 April 201
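The weighted, constrained least-squares criterion mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the prediction errors, and the choice of weighting matrix are all illustrative; the criterion is simply the sample average of the weighted squared joint prediction errors of all node signals.

```python
import numpy as np

def weighted_ls_criterion(residuals, W):
    """Weighted least-squares prediction-error criterion:
    V = (1/N) * sum_t e_t^T W e_t, where each e_t collects the
    joint one-step-ahead prediction errors of all node signals."""
    N = residuals.shape[0]
    return float(sum(e @ W @ e for e in residuals) / N)

# Illustrative data: two node signals, N = 4 samples of prediction errors.
e = np.array([[0.1, -0.2],
              [0.0,  0.1],
              [-0.1, 0.0],
              [0.2,  0.1]])
W = np.eye(2)  # weighting matrix; in the paper it is tied to the noise spectrum
V = weighted_ls_criterion(e, W)
```

In the paper's setting this criterion is additionally minimized subject to a constraint induced by the rank-reduced noise; the sketch only shows the weighted quadratic form itself.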
The CLIC Programme: Towards a Staged e+e- Linear Collider Exploring the Terascale : CLIC Conceptual Design Report
This report describes the exploration of fundamental questions in particle
physics at the energy frontier with a future TeV-scale e+e- linear collider
based on the Compact Linear Collider (CLIC) two-beam acceleration technology. A
high-luminosity high-energy e+e- collider allows for the exploration of
Standard Model physics, such as precise measurements of the Higgs, top and
gauge sectors, as well as for a multitude of searches for New Physics, either
through direct discovery or indirectly, via high-precision observables. Given
the current state of knowledge, following the observation of a 125 GeV
Higgs-like particle at the LHC, and pending further LHC results at 8 TeV and 14
TeV, a linear e+e- collider built and operated in centre-of-mass energy stages
from a few-hundred GeV up to a few TeV will be an ideal physics exploration
tool, complementing the LHC. In this document, an overview of the physics
potential of CLIC is given. Two example scenarios are presented for a CLIC
accelerator built in three main stages of 500 GeV, 1.4 (1.5) TeV, and 3 TeV,
together with operating schemes that will make full use of the machine capacity
to explore the physics. The accelerator design, construction, and performance
are presented, as well as the layout and performance of the experiments. The
proposed staging example is accompanied by cost estimates of the accelerator
and detectors and by estimates of operating parameters, such as power
consumption. The resulting physics potential and measurement precisions are
illustrated through detector simulations under realistic beam conditions.
Comment: 84 pages, published as CERN Yellow Report
https://cdsweb.cern.ch/record/147522
Model-Based Predictive Control Scheme for Cost Optimization and Balancing Services for Supermarket Refrigeration Systems
A new formulation of model predictive control for supermarket refrigeration systems is proposed to facilitate regulatory power services as well as energy cost optimization of such systems in the smart grid. The nonlinear dynamics of large-scale refrigeration plants challenge the predictive control design. It is shown, however, that taking into account the different time scales of the dynamical subsystems makes a linear formulation of a centralized predictive controller possible. A realistic scenario of regulatory power services in the smart grid is considered and formulated in the same objective as the cost optimization. A simulation benchmark, validated against real data and including the significant dynamics of the system, is employed to show the effectiveness of the proposed control scheme.
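The linear predictive control idea can be sketched in miniature. The model, horizon, and weights below are hypothetical and much simpler than a refrigeration plant: a scalar linear system is stacked over the prediction horizon and the unconstrained finite-horizon tracking problem is solved in closed form.

```python
import numpy as np

def mpc_input_sequence(a, b, x0, x_ref, horizon, q=1.0, r=0.1):
    """Unconstrained finite-horizon MPC for a scalar linear model
    x[k+1] = a*x[k] + b*u[k]: stack predictions x = A_s*x0 + B_s*u
    and minimize sum_k q*(x[k]-x_ref)^2 + r*u[k]^2 in closed form."""
    A_s = np.array([a ** (k + 1) for k in range(horizon)])
    B_s = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            B_s[i, j] = a ** (i - j) * b  # effect of u[j] on x[i+1]
    Q = q * np.eye(horizon)
    R = r * np.eye(horizon)
    H = B_s.T @ Q @ B_s + R                     # quadratic cost in u
    rhs = B_s.T @ Q @ (x_ref - A_s * x0)        # linear term
    return np.linalg.solve(H, rhs)              # optimal input sequence

u = mpc_input_sequence(a=0.9, b=0.5, x0=5.0, x_ref=2.0, horizon=10)
```

In a receding-horizon implementation only the first element of `u` would be applied before re-solving; constraints and the economic/regulatory-power terms of the paper's objective are omitted here.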
The benefits of spatial resolution increase in global simulations of the hydrological cycle evaluated for the Rhine and Mississippi basins
To study the global hydrological cycle and its response to a
changing climate, we rely on global climate models (GCMs) and global
hydrological models (GHMs). The spatial resolution of these models is
restricted by computational resources and therefore limits the processes and
level of detail that can be resolved. Increase in computer power therefore
permits increase in resolution, but it is an open question where this
resolution is invested best: in the GCM or GHM. In this study, we evaluated
the benefits of increased resolution, without modifying the representation of
physical processes in the models. By doing so, we can evaluate the benefits
of resolution alone. We assess and compare the benefits of an increased
resolution for a GCM and a GHM for two basins with long observational
records: the Rhine and Mississippi basins. Increasing the resolution of a GCM
(1.125° to 0.25°) results in an improved precipitation budget over the
Rhine basin, attributed to a more realistic large-scale circulation. These
improvements with increased resolution are not found for the Mississippi
basin, possibly because precipitation is strongly dependent on the
representation of still unresolved convective processes. Increasing the
resolution of the GCM improved the simulations of the monthly-averaged
discharge for the Rhine, but did not improve the representation of extreme
streamflow events. For the Mississippi basin, no substantial differences in
precipitation and discharge were found with the higher-resolution GCM
and GHM. Increasing the
resolution of parameters describing vegetation and orography in the
high-resolution GHM (from 0.5° to 0.05°) shows no significant
differences in discharge for both basins. A straightforward resolution
increase in the GHM is thus most likely not the best method to improve
discharge predictions, which emphasizes the need for better representation of
processes and improved parameterizations that go hand in hand with resolution
increase in a GHM.
Abstractions of linear dynamic networks for input selection in local module identification
In abstractions of linear dynamic networks, selected node signals are removed
from the network, while keeping the remaining node signals invariant. The
topology and link dynamics, or modules, of an abstracted network will generally
be changed compared to the original network. Abstractions of dynamic networks
can be used to select an appropriate set of node signals that are to be
measured, on the basis of which a particular local module can be estimated. A
method is introduced for network abstraction that generalizes previously
introduced algorithms, as e.g. immersion and the method of indirect inputs. For
this abstraction method it is shown under which conditions on the selected
signals a particular module will remain invariant. This leads to sets of
conditions on selected measured node variables that allow identification of the
target module.
Comment: 17 pages, 7 figures. Paper to appear in Automatica, Vol. 117, July 202
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
This is the published version. Copyright 2014 American Geophysical Union. This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development.
The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
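The core idea of a distributed local sensitivity analysis can be sketched as follows. The toy model, sample counts, and finite-difference scheme are illustrative assumptions, not the DELSA formulation itself: at many sampled parameter sets, a derivative-based local sensitivity measure is computed for each parameter, yielding a distribution of sensitivities across the parameter space rather than a single global index.

```python
import numpy as np

def local_sensitivities(model, samples, eps=1e-6):
    """At each sampled parameter set, estimate a local sensitivity
    measure per parameter: the squared first-order derivative of the
    model output, via forward finite differences.
    Returns an array of shape (n_samples, n_params)."""
    out = np.zeros_like(samples)
    for i, p in enumerate(samples):
        base = model(p)
        for j in range(len(p)):
            pj = p.copy()
            pj[j] += eps
            out[i, j] = ((model(pj) - base) / eps) ** 2
    return out

# Hypothetical two-parameter model standing in for a nonlinear reservoir.
def toy_model(p):
    return p[0] ** 2 + 0.1 * p[1]

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(100, 2))
S = local_sensitivities(toy_model, samples)
```

Here the sensitivity to the first parameter varies strongly across the parameter space while the second is uniformly weak, which is exactly the kind of spatial variation in importance a single global index would average away.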
Estimation of uncertainty in flood forecasts - a comparison of methods
The scientific literature has many methods for estimating uncertainty; however, there is a lack of information about the characteristics, merits and limitations of the individual methods, particularly for making decisions in practice. This paper provides an overview of the different uncertainty methods for flood forecasting that are reported in the literature, concentrating on two established approaches defined as the ensemble and the statistical approach. Owing to the variety of flood forecasting and warning systems in operation, the question "which uncertainty method is most suitable for which application" is difficult to answer readily. The paper aims to assist practitioners in understanding how to match an uncertainty quantification method to their particular application, using two flood forecasting system case studies in Belgium and Canada. These two specific applications of uncertainty estimation from the literature are compared, illustrating statistical and ensemble methods, and indicating the information and output that these two types of methods offer. The advantages, disadvantages and application of the two different types of method are identified. Although there is no one "best" uncertainty method to fit all forecasting systems, this review helps to explain the current commonly used methods from the available literature for the non-specialist.
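The ensemble approach to forecast uncertainty can be sketched in a few lines. The member count, lead times, and synthetic data below are illustrative assumptions: an ensemble of forecasts is summarized at each lead time by empirical quantiles, giving a prediction interval.

```python
import numpy as np

def ensemble_prediction_interval(ensemble, lower=0.05, upper=0.95):
    """Derive a prediction interval at each lead time from the
    empirical quantiles of the ensemble members
    (rows: members, columns: lead times)."""
    lo = np.quantile(ensemble, lower, axis=0)
    hi = np.quantile(ensemble, upper, axis=0)
    return lo, hi

# Hypothetical 50-member ensemble of discharge forecasts over 24 lead times.
rng = np.random.default_rng(1)
ens = 100.0 + 10.0 * rng.standard_normal((50, 24))
lo, hi = ensemble_prediction_interval(ens)
```

A statistical approach would instead fit a probability model to past forecast errors and attach that distribution to a single deterministic forecast; the two routes yield comparable interval products from very different inputs.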
Continental and global scale flood forecasting systems
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
Attenuated live infectious bronchitis virus QX vaccine disseminates slowly to target organs distant from the site of inoculation
Infectious bronchitis (IB) is a highly contagious respiratory disease of poultry, caused by the avian coronavirus infectious bronchitis virus (IBV). Currently, one of the most relevant genotypes circulating worldwide is IBV-QX (GI-19), for which vaccines have been developed by passaging virulent QX strains in embryonated chicken eggs. Here we explored the attenuated phenotype of a commercially available QX live vaccine, IB Primo QX, in specific pathogen-free broilers. At hatch, birds were inoculated with QX vaccine or its virulent progenitor IBV-D388, and postmortem swabs and tissues were collected each day up to eight days post infection to assess viral replication and morphological changes. In the trachea, viral RNA replication and protein expression were comparable in both groups. Both viruses induced morphologically comparable lesions in the trachea, albeit with a short delay in the vaccinated birds. In contrast, in the kidney, QX vaccine viral RNA was nearly absent, which coincided with the lack of any morphological changes in this organ. This was in contrast to high viral RNA titers and abundant lesions in the kidney after IBV-D388 infection. Furthermore, QX vaccine showed reduced ability to reach and replicate in conjunctivae and intestines including cloaca, resulting in significantly lower titers and delayed protein expression, respectively. Nephropathogenic IBVs might reach the kidney also via an ascending route from the cloaca, based on our observation that viral RNA was detected in the cloaca one day before detection in the kidney. In the kidney, distal tubular segments, collecting ducts and the ureter were positive for viral antigen. Taken together, the attenuated phenotype of QX vaccine seems to rely on slower dissemination and lower replication in target tissues other than the site of inoculation.